31 research outputs found

    New Cross-Layer Channel Switching Policy for TCP Transmission on 3G UMTS Downlink

    Get PDF
    In 3G UMTS, two main transport channels are provided for downlink data transmission: a common channel (FACH) and a dedicated channel (DCH). The performance of TCP in UMTS depends heavily on the channel switching policy used. In this paper, we propose and analyze three new basic threshold-based channel switching policies for UMTS, which we name the QS (Queue Size), FS (Flow Size), and QSFS (QS & FS combined) policies. These policies improve on the modified threshold policy of [1] by about 17% in response-time metrics. We further propose and evaluate a new, improved switching policy that we call the FS-DCH (at-least flow-size threshold on DCH) policy. This policy is biased towards short TCP flows of a few packets and is thus a cross-layer policy: it improves TCP performance by giving the initial few packets of a flow priority on the fast DCH channel. Extensive simulation results confirm this improvement for the case when the number of TCP connections is low.
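    As a rough illustration of the threshold logic this abstract describes, here is a minimal sketch; the threshold values, field names, and exact upgrade rules are assumptions for illustration, not the paper's parameters.

    ```python
    # Hedged sketch of threshold-based FACH/DCH switching. QS_THRESHOLD and
    # FS_THRESHOLD are illustrative values, not the paper's tuned parameters.
    from dataclasses import dataclass

    @dataclass
    class FlowState:
        queue_bytes: int    # bytes buffered for this flow at the network side
        sent_packets: int   # packets already delivered for this flow

    QS_THRESHOLD = 2000  # queue-size threshold (bytes), assumed
    FS_THRESHOLD = 8     # flow-size threshold (packets), assumed

    def qsfs_policy(flow: FlowState) -> str:
        """QSFS: upgrade to the dedicated DCH only when both the buffered
        queue and the flow's delivered volume exceed their thresholds."""
        if flow.queue_bytes > QS_THRESHOLD and flow.sent_packets > FS_THRESHOLD:
            return "DCH"
        return "FACH"

    def fs_dch_policy(flow: FlowState) -> str:
        """FS-DCH bias: keep the first few packets of every flow on the fast
        DCH so short TCP flows finish quickly, then fall back to QSFS."""
        if flow.sent_packets <= FS_THRESHOLD:
            return "DCH"
        return qsfs_policy(flow)
    ```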

    Impact of IT Monoculture on Behavioral End Host Intrusion Detection

    Get PDF
    In this paper, we study the impact of today's IT policies, defined based upon a monoculture approach, on the performance of end-host anomaly detectors. This approach leads to a uniform configuration of host intrusion detection systems (HIDS) across all hosts in an enterprise network. We assess the performance impact of this policy from the individual's point of view by analyzing network traces collected from 350 enterprise users. We uncover a great deal of diversity in the user population in terms of “tail” behavior, i.e., the component that matters for anomaly detection systems. We demonstrate that the monoculture approach to HIDS configuration results in users experiencing wildly different false positive and false negative rates. We then introduce new policies that leverage this diversity, and show that not only do they dramatically improve performance for the vast majority of users, but they also reduce the number of false positives arriving at centralized IT operation centers, and can reduce attack strength.
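    To make the contrast concrete, the sketch below compares one population-wide (monoculture) detection threshold against per-user thresholds calibrated on each user's own tail; the percentile, the feature, and the data layout are assumptions for illustration.

    ```python
    # Hedged sketch: global vs. per-user anomaly thresholds. counts[user] is
    # assumed to be a history of, e.g., new destinations contacted per time
    # window, taken from that user's past traffic.
    import numpy as np

    def monoculture_threshold(counts, pct=99.0):
        """One threshold for everyone, fit on the pooled population."""
        pooled = np.concatenate(list(counts.values()))
        t = np.percentile(pooled, pct)
        return {user: t for user in counts}

    def per_user_thresholds(counts, pct=99.0):
        """A threshold per user, fit on that user's own history, so
        heavy-tailed users are not flooded with false positives and quiet
        users are not left effectively unmonitored."""
        return {u: np.percentile(c, pct) for u, c in counts.items()}

    counts = {"light_user": np.random.poisson(3, 1000),
              "heavy_user": np.random.poisson(40, 1000)}
    print(monoculture_threshold(counts))   # same cutoff for both users
    print(per_user_thresholds(counts))     # cutoffs matched to each tail
    ```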

    Challenges in Security and Traffic Management in Enterprise Networks

    No full text
    Management of enterprise networks is a challenging problem because of their continued growth in size and functionality. We propose and evaluate a framework, Godai, which addresses the challenges in (i) setting thresholds in end-host anomaly detectors, (ii) hierarchical summarization of data, and (iii) application traffic classification. Godai enables IT operators to identify the end hosts that an attacker has enslaved to launch attacks, and it achieves this by diversifying anomaly detector configurations. The general policies in the framework are holistic and achieve two goals: (a) balance the trade-off between false-alarm and mis-detection rates, and (b) show that the benefits of full diversity can be attained at reduced complexity by clustering the end hosts and treating each cluster homogeneously. The underlying principle of attack detection is to identify changes in data. Godai generalizes this concept to data with hierarchical identifiers, e.g., IP prefixes and URLs. A parsimonious hierarchical summarization eases the burden on IT operators of interpreting analysis reports. Godai provides efficient and provable algorithms to produce parsimonious explanations from the output of any statistical model that provides predictions and confidence intervals, making it widely applicable. Finally, Godai takes a step towards associating applications with traffic flows. It critically revisits the existing ad hoc traffic classification approaches based on transport-layer ports, host behavior, and flow features, and analyzes their effectiveness. The results allow us to answer questions about the best available traffic classification approach, the conditions under which it performs well, and the strengths and limitations of each approach. These multifarious functionalities make Godai a viable solution for enterprise network management.
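    The clustered-configuration idea (full diversity at reduced complexity) can be sketched as below; the features used to group hosts, the number of clusters, and the percentile rule are all assumptions for illustration, not Godai's actual mechanism.

    ```python
    # Hedged sketch: group hosts with similar behavior and configure each
    # cluster homogeneously, instead of one threshold per host (full
    # diversity) or one threshold for all hosts (monoculture).
    import numpy as np
    from sklearn.cluster import KMeans

    def cluster_thresholds(features, histories, k=5, pct=99.0):
        """features[h]: a behavior vector per host; histories[h]: that
        host's past per-window activity counts."""
        hosts = list(features)
        X = np.array([features[h] for h in hosts])
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(X)
        thresholds = {}
        for c in range(k):
            members = [h for h, lbl in zip(hosts, labels) if lbl == c]
            if not members:
                continue
            pooled = np.concatenate([histories[h] for h in members])
            t = np.percentile(pooled, pct)  # one threshold per cluster
            thresholds.update({h: t for h in members})
        return thresholds
    ```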

    TCP Optimization through FEC, ARQ and Transmission Power Tradeoffs

    No full text
    TCP performance degrades when end-to-end connections extend over wireless links, which are characterized by high bit error rates and intermittent connectivity. Such link characteristics can significantly degrade TCP performance because the TCP sender mistakes wireless losses for congestion losses, triggering unnecessary congestion-control actions. Link errors can be reduced by increasing transmission power, code redundancy (FEC), or the number of retransmissions (ARQ). But increasing power costs resources, increasing code redundancy reduces the available channel bandwidth, and increasing persistence increases end-to-end delay. This paper proposes optimizing TCP by properly tuning transmission power, FEC, and ARQ in wireless environments (WLAN and WWAN).
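    A toy model (not the paper's actual formulation) can make the tension concrete: stronger FEC and more ARQ retries shrink the residual loss that TCP sees, but cost code rate and extra transmissions. All constants below are assumptions.

    ```python
    # Hedged sketch: residual loss and effective goodput under a simple
    # binomial bit-error model. Parameter values are illustrative only.
    from math import comb

    def packet_error_rate(ber, n_bits, t_correctable):
        """A packet fails if more than t_correctable of its n_bits err."""
        ok = sum(comb(n_bits, i) * ber**i * (1 - ber)**(n_bits - i)
                 for i in range(t_correctable + 1))
        return 1 - ok

    def residual_loss(ber, n_bits, t, retries):
        """Loss surviving FEC plus up to `retries` link retransmissions;
        this is the loss TCP would misread as congestion."""
        return packet_error_rate(ber, n_bits, t) ** (retries + 1)

    def effective_goodput(raw_bw, code_rate, ber, n_bits, t, retries):
        """Bandwidth left after FEC overhead and expected retransmissions."""
        per = packet_error_rate(ber, n_bits, t)
        expected_tx = (1 - per ** (retries + 1)) / (1 - per) if per > 0 else 1.0
        delivered = 1 - residual_loss(ber, n_bits, t, retries)
        return raw_bw * code_rate * delivered / expected_tx

    # Higher power lowers ber; stronger FEC raises t but lowers code_rate;
    # more ARQ raises retries but adds delay (not modeled here).
    print(effective_goodput(2e6, 0.8, 1e-4, 12000, 4, 2))
    ```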

    Effectiveness of Loss Labeling in Improving TCP Performance in Wired/Wireless Networks

    No full text
    The current congestion-oriented design of TCP hinders its ability to perform well in hybrid wireless/wired networks. We propose a new improvement on TCP NewReno (NewReno-FF) that uses a new loss labeling technique to discriminate wireless from congestion losses. The proposed technique is based on estimating the average and variance of the round-trip time using a filter, called the Flip Flop filter, that is augmented with history information. We show the comparative performance of TCP NewReno, NewReno-FF, and TCP Westwood through extensive simulations. We study the fundamental gains and limits using TCP NewReno with varying loss labeling accuracy (NewReno-LL) as a benchmark. Lastly, our investigation opens up important research directions. First, there is a need for a finer-grained classification of losses (even within congestion and wireless losses) for TCP in heterogeneous networks. Second, it is essential to develop an appropriate control strategy for recovery after the correct classification of a packet loss.
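    A minimal sketch of the flip-flop idea as this abstract describes it: track RTT with an agile and a stable EWMA, trust the agile estimate while samples stay inside control limits, and label a loss as wireless when RTT near the loss shows no congestion buildup. The gains and the decision rule are illustrative assumptions, not NewReno-FF's actual parameters.

    ```python
    # Hedged sketch of a flip-flop RTT filter used for loss labeling.
    class FlipFlopRTT:
        def __init__(self, agile_gain=0.5, stable_gain=0.05, limit=3.0):
            self.agile = self.stable = None   # two EWMAs over RTT samples
            self.dev = 0.0                    # mean absolute deviation
            self.agile_gain = agile_gain
            self.stable_gain = stable_gain
            self.limit = limit                # control-limit width

        def update(self, rtt):
            if self.agile is None:
                self.agile = self.stable = self.estimate = rtt
                return
            self.dev += 0.25 * (abs(rtt - self.stable) - self.dev)
            # Flip-flop: follow the agile filter unless the sample is an
            # outlier relative to the stable filter's control limits.
            in_control = abs(rtt - self.stable) <= self.limit * self.dev
            self.agile += self.agile_gain * (rtt - self.agile)
            self.stable += self.stable_gain * (rtt - self.stable)
            self.estimate = self.agile if in_control else self.stable

        def label_loss(self, rtt_at_loss):
            """Congestion inflates queueing delay, so a loss seen at a
            near-baseline RTT is labeled wireless (call after warm-up)."""
            if rtt_at_loss > self.stable + self.limit * self.dev:
                return "congestion"
            return "wireless"

    f = FlipFlopRTT()
    for rtt in [0.10, 0.11, 0.10, 0.12, 0.11]:
        f.update(rtt)
    print(f.label_loss(0.11))  # no delay buildup at loss time -> "wireless"
    ```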

    A Bayesian Approach for TCP to Distinguish Congestion from Wireless Losses

    No full text
    The Transmission Control Protocol (TCP) has been the protocol of choice for many Internet applications requiring reliable connections. The design of TCP has been challenged by the extension of connections over wireless links. In this paper, we investigate a Bayesian approach to infer at the source host the reason for a packet loss, whether congestion or wireless transmission error. Our approach is "mostly" end-to-end since it requires only one long-term average quantity (namely, the long-term average packet loss probability over the wireless segment) that may be best obtained with help from the network (e.g., the wireless access agent). Specifically, we use maximum likelihood ratio tests to evaluate TCP as a classifier of the type of packet loss. We study the effectiveness of short-term classification of packet errors (congestion vs. wireless), given stationary prior error probabilities and distributions of packet delays conditioned on the type of packet loss (measured over a longer time scale). Using our Bayesian-based approach and extensive simulations, we demonstrate that an efficient online error classifier can be built as long as congestion-induced losses and losses due to wireless transmission errors produce sufficiently different statistics. We introduce a simple queueing model to underline the conditional delay distributions arising from different kinds of packet losses over a heterogeneous wired/wireless path. To infer the conditional delay distributions, we consider a Hidden Markov Model (HMM) that explicitly includes the discretized delay values observed by TCP in its state definition, in addition to an HMM that does not, as in [9]. We demonstrate how estimation accuracy is influenced by different proportions of congestion versus wireless losses and penalt..
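    The classification rule this abstract describes, a likelihood ratio test combined with long-term priors over conditional delay distributions, can be sketched as follows; the Gaussian densities and every parameter value are illustrative assumptions, not the paper's measured distributions.

    ```python
    # Hedged sketch: MAP classification of a packet loss from the delay
    # observed around it. Densities and parameters are assumed, not the
    # paper's; p_wireless plays the role of the long-term average
    # wireless-loss probability obtained with help from the network.
    from math import exp, pi, sqrt

    def gaussian(x, mu, sigma):
        return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

    def classify_loss(delay, p_wireless,
                      mu_c=0.30, s_c=0.05,   # delay near congestion losses (s)
                      mu_w=0.12, s_w=0.04):  # delay near wireless losses (s)
        """Label 'congestion' iff the posterior odds favor congestion."""
        lr = gaussian(delay, mu_c, s_c) / gaussian(delay, mu_w, s_w)
        prior_odds = (1 - p_wireless) / p_wireless
        return "congestion" if lr * prior_odds > 1 else "wireless"

    print(classify_loss(delay=0.28, p_wireless=0.3))  # -> "congestion"
    ```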